Words near each other
・ Vector Motors
・ Vector notation
・ Vector NTI
・ Vector operator
・ Vector optimization
・ Vector packs
・ Vector Pascal
・ Vector path
・ Vector Pipeline
・ Vector potential
・ Vector Prime
・ Vector Prime (Transformers)
・ Vector processor
・ Vector Product Format
・ Vector projection
・ Vector quantization
・ Vector radiative transfer
・ Vector Security
・ Vector signal analyzer
・ Vector slime
・ Vector Software
・ Vector soliton
・ Vector space
・ Vector space model
・ Vector spaces without fields
・ Vector spherical harmonics
・ Vector srv8
・ Vector sum excited linear prediction
・ Vector synthesis
・ Vector tiles



Vector quantization : Wikipedia English edition
Vector quantization
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error and rare data have high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.
Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model and to sparse coding models used in deep learning algorithms such as autoencoders.
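The encode/decode step can be made concrete with a short sketch. The function names, and the assumption that the data and codebook are NumPy arrays of shape (n, d) and (k, d), are illustrative rather than taken from the article:
<syntaxhighlight lang="python">
import numpy as np

def vq_encode(points, codebook):
    """Return, for each point, the index of its nearest codebook centroid."""
    # points: (n, d) array, codebook: (k, d) array
    dists = np.linalg.norm(points[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def vq_decode(indices, codebook):
    """Reconstruct each point as its centroid; this is the lossy step."""
    return codebook[indices]
</syntaxhighlight>
Transmitting only the integer indices (plus the small codebook) instead of the original vectors is what makes this a lossy compression scheme.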
== Training ==
A simple training algorithm for vector quantization is:
# Pick a sample point at random
# Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
# Repeat
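A minimal sketch of this update rule in Python, assuming the training data are a NumPy array of shape (n, d); the step size and iteration count are illustrative choices, not values prescribed by the article:
<syntaxhighlight lang="python">
import numpy as np

def train_vq(data, k, steps=10000, step_size=0.05, seed=0):
    """Competitive-learning VQ: repeatedly nudge the nearest centroid toward a random sample."""
    rng = np.random.default_rng(seed)
    # Initialise the codebook with k distinct data points chosen at random.
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(steps):
        p = data[rng.integers(len(data))]                          # 1. pick a sample point at random
        nearest = np.argmin(np.linalg.norm(codebook - p, axis=1))
        codebook[nearest] += step_size * (p - codebook[nearest])   # 2. move it a small fraction closer
    return codebook
</syntaxhighlight>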
A more sophisticated algorithm reduces the bias in the density matching estimation, and ensures that all points are used, by including an extra sensitivity parameter:
# Increase each centroid's sensitivity by a small amount
# Pick a sample point at random
# Find the quantization vector centroid for which the distance to the sample point, minus its sensitivity, is the smallest
# Move the chosen centroid toward the sample point by a small fraction of the distance
# Set the chosen centroid's sensitivity to zero
# Repeat
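A sketch of the sensitivity-based variant under the same assumptions; the sensitivity increment, like the step size, is an illustrative parameter rather than one given in the article:
<syntaxhighlight lang="python">
import numpy as np

def train_vq_sensitive(data, k, steps=10000, step_size=0.05,
                       sensitivity_gain=0.01, seed=0):
    """VQ training with per-centroid sensitivities so that rarely chosen centroids still get used."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    sensitivity = np.zeros(k)
    for _ in range(steps):
        sensitivity += sensitivity_gain                    # 1. raise every centroid's sensitivity
        p = data[rng.integers(len(data))]                  # 2. pick a sample point at random
        dists = np.linalg.norm(codebook - p, axis=1)
        chosen = np.argmin(dists - sensitivity)            # 3. smallest (distance - sensitivity)
        codebook[chosen] += step_size * (p - codebook[chosen])
        sensitivity[chosen] = 0.0                          # reset the winner's sensitivity
    return codebook
</syntaxhighlight>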
It is desirable to use a cooling schedule for the step size to produce convergence: see Simulated annealing. Another (simpler) method is LBG, which is based on k-means.
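For comparison, a bare-bones batch iteration in the k-means style that LBG builds on, alternating assignment and centroid recomputation; the random initialisation and fixed iteration count are simplifying assumptions:
<syntaxhighlight lang="python">
import numpy as np

def train_vq_batch(data, k, iters=50, seed=0):
    """k-means-style batch refinement of the codebook (the basis of the LBG algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        assignment = np.argmin(dists, axis=1)
        # Replace each centroid by the mean of the points assigned to it.
        for i in range(k):
            members = data[assignment == i]
            if len(members) > 0:
                codebook[i] = members.mean(axis=0)
    return codebook
</syntaxhighlight>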
The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.

Excerpt source: Wikipedia, the free encyclopedia
Read the full article "Vector quantization" at Wikipedia


